
    The cerebellum could solve the motor error problem through error increase prediction

    We present a cerebellar architecture with two main characteristics. The first is that complex spikes respond to increases in sensory errors. The second is that cerebellar modules associate particular contexts where errors have increased in the past with corrective commands that stop the increase in error. We analyze our architecture formally and computationally for the case of reaching in a 3D environment. In the case of motor control, we show that there are synergies of this architecture with the Equilibrium-Point hypothesis, leading to novel ways to solve the motor error problem. In particular, the presence of desired equilibrium lengths for muscles provides a way to know when the error is increasing, and which corrections to apply. In the context of Threshold Control Theory and Perceptual Control Theory, we show how to extend our model so it implements anticipative corrections in cascade control systems that span from muscle contractions to cognitive operations.
    Comment: 34 pages (without bibliography), 13 figures
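
    The central mechanism can be sketched in a few lines. The toy below is an illustrative assumption on our part, not the paper's model: a constant disturbance pushes a one-dimensional state away from a target, and a corrective command is updated only on time steps where the error has grown, loosely analogous to complex spikes signaling error increases.

```python
def reach_with_error_increase_correction(target=1.0, x=0.0, drift=-0.3,
                                         gain=2.0, dt=0.01, steps=4000):
    """Toy corrector that engages only when the error grows.

    A constant disturbance (`drift`) pushes the state away from the
    target; whenever the error increases, the corrective command is
    reset to oppose it. All names and parameter values are
    illustrative, not taken from the paper.
    """
    prev_err = abs(target - x)
    c = 0.0                                # current corrective command
    for _ in range(steps):
        x += dt * (drift + c)              # plant with constant disturbance
        err = abs(target - x)
        if err > prev_err:                 # error increased: update correction
            c = gain * (target - x)
        prev_err = err
    return x
```

    With `gain=0.0` the state drifts far from the target; with the correction active it settles near it, which is the qualitative point of the architecture.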

    Learning and generation of temporal sequences in the neocortex

    The temporal structure of neuronal activity plays a fundamental role in brain function. In addition to the compelling structure found in birdsong, repeating temporal sequences have been experimentally observed in the mammalian neocortex, both at the level of local field potentials and at the level of individual neurons.

    The mechanisms underlying the learning and generation of temporal sequences are currently unknown. An attractive idea is that time-asymmetric Hebbian mechanisms capture the temporal structure of afferent signals by selectively strengthening the connections between sequentially activated neuronal populations. We explore some consequences of this idea using a simplified model of neocortex.

    Our model uses excitatory and inhibitory firing rate variables, along with adaptation and time-asymmetric Hebbian plasticity, to create a versatile pattern generator that can store and reconstruct input sequences. We study several related properties of this model, mainly: 1) the formation of intersecting and complex sequences, 2) how the structure of the connection matrix affects the dynamics of the system and the symmetries observed in the activity of the network, and 3) pathological behaviors due to abnormalities in plasticity and inhibition, and their possible relation to epilepsy.
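
    The time-asymmetric Hebbian idea can be illustrated with a minimal rate-based sketch (the rule form and all constants are our own illustrative assumptions, not the model in the abstract): a low-pass trace of each unit's past activity is correlated with current activity, so weights grow in the direction the sequence was played.

```python
def asymmetric_hebb(rates, dt=1.0, lr=0.01, tau=5.0):
    """Toy time-asymmetric Hebbian rule applied to a rate sequence.

    w[i][j] grows when unit j's low-pass-filtered *past* activity
    coincides with unit i's *current* activity, so connections point
    forward along the sequence.
    """
    n = len(rates[0])
    w = [[0.0] * n for _ in range(n)]
    trace = [0.0] * n                       # decaying memory of past rates
    for r in rates:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += lr * r[i] * trace[j]  # post(now) x pre(past)
        for j in range(n):
            trace[j] += dt * (r[j] - trace[j]) / tau
    return w
```

    Playing unit 0 before unit 1 strengthens the 0-to-1 connection but not the reverse, which is the asymmetry that lets the network later replay the sequence.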

    A differential Hebbian framework for biologically-plausible motor control

    In the realm of motor control, artificial agents cannot match the performance of their biological counterparts. We thus explore a neural control architecture that is both biologically plausible and capable of fully autonomous learning. The architecture consists of feedback controllers that learn to achieve a desired state by selecting the errors that should drive them. This selection happens through a family of differential Hebbian learning rules that, through interaction with the environment, can learn to control systems where the error responds monotonically to the control signal. We next show that in a more general case, neural reinforcement learning can be coupled with a feedback controller to reduce errors that arise non-monotonically from the control signal. The use of feedback control reduces the complexity of the reinforcement learning problem, because only a desired value must be learned, with the controller handling the details of how it is reached. This makes the function to be learned simpler, potentially allowing more complex actions to be learned. We discuss how this approach could be extended to hierarchical architectures.
    Comment: 35 pages, 10 figures. Appendix: 9 pages, 2 figures
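
    The key claim — a differential Hebbian rule can discover a working feedback gain when the error responds monotonically to the control signal — can be sketched on a one-dimensional plant. The plant, noise level, and exact rule form below are illustrative assumptions, not the paper's architecture.

```python
import random

def learn_feedback_gain(target=1.0, steps=20000, dt=0.01, lr=0.1, seed=1):
    """Toy differential Hebbian rule for a feedback gain.

    The gain w is strengthened when the control signal u correlates
    with a *decrease* in error; this succeeds here because the error
    responds monotonically to u. A realistic version would bound or
    normalize w, which this sketch omits.
    """
    rng = random.Random(seed)
    x, w = 0.0, 0.0
    for _ in range(steps):
        e = target - x                          # current error
        u = w * e + 0.1 * rng.gauss(0.0, 1.0)   # control plus exploration noise
        x += dt * u                             # plant: error is monotone in u
        de = (target - x) - e                   # change in error this step
        w -= lr * u * de                        # differential Hebbian update
    return w, x
```

    Because the rule only correlates the control signal with the sign of the error change, it finds the correct gain sign regardless of the plant's sign, which is why monotonicity is the critical assumption.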

    Working Memory and Temporal Patterns: The Implications of Neural Populations

    Working memory, the ability to temporarily retain information which will be used to guide subsequent behavior, is a central component of our cognitive abilities. Almost 40 years ago, electrophysiological experiments in monkeys established that persistent activity may be the neuronal substrate of working memory. Many computational models have been proposed in order to explain persistent activity, with recurrent connections playing a prominent role in many of these. No model, however, has captured all the important features in working memory networks. This work presents three related models which seek to understand some of these features. In particular, the first model explores the formation of firing rate patterns during the delay period of working memory tasks; the second model explores the dynamics of working memory networks with reduced inhibition, and their possible role in epilepsy; the third model explores the formation of temporal sequences and closed loops of activity. The three models assume the existence of densely connected neural populations (such as minicolumns), which respond similarly to the same stimuli. Another common feature is the role of dynamic synapses: the first two models rely on synaptic facilitation, whereas the third uses temporally asymmetric Hebbian plasticity.

    Draculab: A Python Simulator for Firing Rate Neural Networks With Delayed Adaptive Connections

    Draculab is a neural simulator with a particular use scenario: firing rate units with delayed connections, using custom-made unit and synapse models, possibly controlling simulated physical systems. Draculab also has a particular design philosophy: it aims to blur the line between users and developers. Three factors help to achieve this: a simple design using Python's data structures, extensive use of standard libraries, and profusely commented source code. This paper is an introduction to Draculab's architecture and philosophy. After presenting some example networks, it explains the basic algorithms and data structures that constitute the essence of this approach. Draculab's relation to other simulators is discussed, as well as the reasons why connection delays and interaction with simulated physical systems are emphasized.
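
    The emphasized combination — firing-rate units whose connections carry transmission delays — can be sketched independently of Draculab's actual API (which this toy does not reproduce): each unit integrates inputs read from a buffer of past activities rather than from the current state.

```python
import math
from collections import deque

def run_delayed_pair(steps=500, dt=0.01, tau=0.1, delay=0.05, w=1.5):
    """Two mutually connected firing-rate units with delayed inputs.

    Each unit reads its partner's activity from a short buffer, so
    inputs arrive roughly `delay` seconds late. A generic sketch,
    not Draculab's interface; all constants are illustrative.
    """
    n_d = int(round(delay / dt))            # delay in simulation steps
    r = [0.0, 0.0]                          # unit activities
    history = deque([list(r)] * (n_d + 1), maxlen=n_d + 1)
    drive = [1.0, 0.0]                      # external input to unit 0 only
    for _ in range(steps):
        past = history[0]                   # activity from ~`delay` ago
        for i in range(2):
            inp = w * past[1 - i] + drive[i]
            r[i] += dt * (-r[i] + math.tanh(inp)) / tau
        history.append(list(r))
    return r
```

    Here unit 0 is driven externally and recruits unit 1 through the delayed excitatory coupling; both settle at elevated rates.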

    Working Memory Cells' Behavior May Be Explained by Cross-Regional Networks with Synaptic Facilitation

    Neurons in the cortex exhibit a number of activity patterns that correlate with working memory. Specifically, averaged across trials of working memory tasks, neurons exhibit different firing rate patterns during the delay of those tasks. These patterns include: 1) persistent fixed-frequency elevated rates above baseline, 2) elevated rates that decay throughout the task's memory period, 3) rates that accelerate throughout the delay, and 4) patterns of inhibited firing (below baseline) analogous to each of the preceding excitatory patterns. Persistent elevated rate patterns are believed to be the neural correlate of working memory retention and of preparation for the behavioral/motor responses required in working memory tasks. Models have proposed that such activity corresponds to stable attractors in cortical neural networks with fixed synaptic weights. However, these models typically do not reproduce the variability in patterned behavior, or the firing statistics of real neurons, across and within trials of working memory tasks. Here we examine the effect of dynamic synapses and of network architectures with multiple cortical areas on the states and dynamics of working memory networks. The analysis indicates that the multiple pattern types exhibited by cells in working memory networks are inherent in networks with dynamic synapses, and that the variability and firing statistics in such networks with distributed architectures agree with those observed in the cortex.
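
    The dynamic-synapse ingredient can be illustrated with a rate-based facilitation sketch in the spirit of the Tsodyks-Markram model (parameter values are illustrative, not the paper's): a utilization variable rises with presynaptic activity and relaxes back to baseline afterward, so recently active synapses transmit more strongly.

```python
def facilitation_trace(rates, dt=0.001, U=0.1, tau_f=1.0):
    """Rate-based synaptic facilitation sketch (Tsodyks-Markram style).

    The utilization u grows toward 1 while the presynaptic rate is
    high, and relaxes back to its baseline U afterward, leaving a
    decaying memory of recent activity at the synapse.
    """
    u = U
    out = []
    for r in rates:
        du = (U - u) / tau_f + U * (1.0 - u) * r
        u += dt * du
        out.append(u)
    return out
```

    Driving the synapse at a high rate for one second and then silencing it shows the facilitated state outlasting the stimulus, which is the property these models exploit for delay-period memory.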

    Self-configuring feedback loops for sensorimotor control

    How dynamic interactions between nervous system regions in mammals perform online motor control remains an unsolved problem. In this paper, we show that feedback control is a simple yet powerful way to understand the neural dynamics of sensorimotor control. We make our case using a minimal model comprising spinal cord, sensory cortex, and motor cortex, coupled by long connections that are plastic. The model learns from scratch to perform reaching movements in several directions with a planar arm actuated by 6 muscles. It satisfies biological plausibility constraints, such as neural implementation, transmission delays, local synaptic learning, and continuous online learning. Using differential Hebbian plasticity, the model can go from motor babbling to reaching arbitrary targets in less than 10 minutes of simulated time. Moreover, independently of the learning mechanism, properly configured feedback control has many emergent properties: neural populations in motor cortex show directional tuning and oscillatory dynamics, the spinal cord creates convergent force fields that add linearly, and movements are ataxic (as in a motor system without a cerebellum).
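
    Stripped to its minimum (one dimension, a fixed gain, no learning — an illustration of the premise, not the paper's model), a feedback loop with a sensory transmission delay can still drive a plant to a target:

```python
def delayed_feedback_reach(target=0.6, gain=2.0, delay_steps=5,
                           dt=0.01, steps=2000):
    """Proportional feedback acting on delayed sensory readings.

    The controller only ever sees the state as it was `delay_steps`
    steps ago, yet for modest gains the loop still converges on the
    target. All values are illustrative.
    """
    x = 0.0
    buf = [0.0] * delay_steps          # pipeline of delayed sensor readings
    for _ in range(steps):
        sensed = buf.pop(0)            # reading from `delay_steps` steps ago
        buf.append(x)
        u = gain * (target - sensed)   # proportional corrective command
        x += dt * u                    # integrator plant
    return x
```

    Raising the gain or lengthening the delay eventually destabilizes this loop, which is why the paper's emphasis on properly configured feedback, rather than feedback per se, matters.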